perm filename EPISTE[S79,JMC] blob sn#525135 filedate 1980-07-20 generic text, type C, neo UTF8
.require "memo.pub[let,jmc]" source;
.CB APPROACHING EPISTEMOLOGY FROM ARTIFICIAL INTELLIGENCE

.item←0
#. Introduction

	Artificial intelligence (or cognology) is a scientific
discipline concerned with making machines behave intelligently.
To behave intelligently machines must acquire knowledge, represent
it internally and use it to guide action.  Therefore, artificial
intelligence must take some attitude towards questions of
epistemology, and this paper applies experience
in artificial intelligence to doing epistemology better.

	It would seem that artificial intelligence could advance very far
with a naive common sense epistemology and could postpone some
of the deeper problems that have concerned philosophers, such as
the justification for induction.  Unfortunately, it has not proved
possible to formulate even a naive epistemology so as to be usable
by computers. Indeed many matters that
concern philosophers, such as the relation between sense data
and phenomena in the world, require some resolution in order to build
even a simple computer program that interacts intelligently with
the physical world.

	On the other hand, trying to construct a
common sense epistemology leads to conclusions about epistemology
in general, and this paper contains some of mine.  They are too
similar to views that I held before studying artificial intelligence
to justify a claim that they are forced by that study.  Moreover, other
AI researchers have different opinions.


#. Formalized knowledge seekers in formalized worlds

	We propose that epistemology be studied by methods
like those of metamathematics.

	An %2epistemological world system%1 is defined by a world
and a knowledge seeker within the world.  In general, the world
is a causal system whose state changes in time according to prescribed
rules.  We may imagine it to be a finite automaton, but causal systems
with continuous time will eventually have to be treated.  We will
be less interested in general automata than in those with specific
interesting kinds of structure resembling those we ascribe to the
real world, e.g. collections of interacting bodies in a 3-dimensional space.

	The knowledge seeker ⊗KS is a subsystem in the world.  Normally
we shall suppose it capable of executing computer programs and will
suppose that its knowledge seeking strategy is embodied in such a
program.
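	The setup above can be sketched in a program.  This is a minimal
illustrative sketch, not anything from the text: all names are
hypothetical, the world is modeled as a finite automaton whose state
includes the knowledge seeker ⊗KS as a subsystem, and the knowledge
seeking strategy is an ordinary program mapping an observation and
⊗KS's internal state to an action and a new internal state.

```python
from typing import Tuple

# Hypothetical sketch: a world state pairs the environment's state
# with KS's internal state; KS is a subsystem inside the world.
WorldState = Tuple[str, str]          # (environment state, KS memory)

def ks_strategy(observation: str, memory: str) -> Tuple[str, str]:
    """KS's knowledge-seeking program: record what it sees, then probe."""
    return "probe", memory + observation

def world_step(state: WorldState) -> WorldState:
    """One step of the world automaton's prescribed transition rule."""
    env, memory = state
    observation = env[0]              # KS senses part of the environment
    action, memory = ks_strategy(observation, memory)
    env = env[1:] + action[0]         # environment evolves, influenced by KS
    return env, memory

state: WorldState = ("abc", "")
for _ in range(3):
    state = world_step(state)
print(state)                          # final (environment, KS memory)
```

The point of the sketch is only that ⊗KS and the rest of the world
evolve together under one transition rule, while ⊗KS's strategy is
an ordinary computer program.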

	We will be concerned with a set of possible world systems
and a formal language ⊗L for expressing facts about particular world
systems and particular states of such systems.  We want to study
the effectiveness of knowledge seeking strategies in learning facts
about the world and its state.  This is to be done by prescribing
a relation ⊗B(state,sentence) between states of the knowledge
seeker and sentences of ⊗L.  ⊗B(state, sentence) is interpreted
as asserting that ⊗KS believes ⊗sentence when it is in ⊗state. 
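	A toy rendering of the relation ⊗B may make this concrete.  In
this hedged sketch (the sentences and the storage scheme are invented
for illustration), a state of ⊗KS is simply the set of sentences it
has stored, and ⊗B holds when the sentence is among them.

```python
# Sentences of the formal language L (an illustrative assumption).
L = {"raining", "not raining", "cold"}

def B(state: frozenset, sentence: str) -> bool:
    """KS believes `sentence` in `state` iff it has stored that sentence."""
    return sentence in L and sentence in state

s1 = frozenset({"raining"})
print(B(s1, "raining"))      # True
print(B(s1, "cold"))         # False
```

Any richer notion of belief, e.g. one closed under inference, would
replace the membership test with something more elaborate, but the
relation would still pair states of ⊗KS with sentences of ⊗L.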

	We believe that an effective knowledge seeker in worlds
that resemble our own will be less atomistic than is commonly
hypothesized.  Instead of verifying particular facts on the
basis of evidence, it will work with whole theories resembling
those of science.  A theory postulates many entities, some of
which are not directly observable.  Particular experiences
lead to its confirmation, disconfirmation and modification in
rather complicated ways.  This is commonly accepted with regard
to scientific theories, but we believe it is also characteristic
of common sense information gathering.

	An analogy with cryptography may be helpful.  Confronted
with a simple substitution cipher, one tries various pairings of
cryptogram and plaintext letters motivated by suggestive but
not conclusive clues.  Plausible patterns suggest guessing more letters
and implausible ones suggest modifying existing guesses.  Finally
the whole cryptogram is revealed.  One never has a proof that the
solution is the only one possible, but empirically one almost always
comes up with the precise solution intended by the poser of the
cryptogram and the rare discrepancies involve single letters
usually appearing once in the cryptogram.  I know of no example
of a substantial cryptogram admitting more than one solution.
We suggest that many scientific and other real world problems
have a similar structure.  The solution is complicated, and
there is no proof of the uniqueness of the solution,
but no other solution can be found.
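	The guess-extend-retract process described in the analogy can be
illustrated by a small backtracking solver.  The dictionary and the
cryptogram below are made up for the example: pairings of cryptogram
and plaintext letters are extended when they yield plausible words and
retracted when they do not, until a consistent solution emerges.

```python
WORDS = {"the", "cat", "sat"}           # assumed plaintext vocabulary
CRYPTOGRAM = ["qda", "hfq", "xfq"]      # an enciphered message

def solve(words, mapping):
    """Extend a partial letter pairing word by word, backtracking on failure."""
    if not words:
        return mapping                  # every word decoded: done
    cipher_word = words[0]
    for plain in WORDS:
        if len(plain) != len(cipher_word):
            continue
        trial = dict(mapping)
        ok = True
        for c, p in zip(cipher_word, plain):
            # Reject pairings that contradict existing guesses.
            if trial.get(c, p) != p or (p in trial.values() and trial.get(c) != p):
                ok = False              # implausible: modify the guess
                break
            trial[c] = p
        if ok:
            result = solve(words[1:], trial)   # plausible: guess more letters
            if result is not None:
                return result
    return None                         # dead end: retract and try another pairing

solution = solve(CRYPTOGRAM, {})
print("".join(solution[c] for c in "qda"))     # prints "the"
```

As in the text, nothing proves the solution unique; the search simply
fails to find any other assignment consistent with all the clues.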

	A related phenomenon involves the postulation of unobserved
or partially observed entities, for example a disease or a unicorn.

	Topics remaining to be treated include epistemological
relativity, Moore's gedanken experiments, and the self-confirmation
of induction, which is of particular interest.
.skip 1
.begin verbatim
John McCarthy
Artificial Intelligence Laboratory
Computer Science Department
Stanford University
Stanford, California 94305

ARPANET: MCCARTHY@SU-AI
.end

.turn on "{"
%7This version of
EPISTE[S79,JMC]
translated for printing (by PUB) at {time} on {date}.%1